Semiotic Machines

Author

  • Winfried Nöth
Abstract

What is a semiotic machine? A robot, a computer running programs endowed with artificial intelligence, any computer, a simple calculating machine, or even an ordinary mechanical typewriter? The question will be examined in the light of Charles Sanders Peirce’s concept of semiosis, which requires reference to processes such as reasoning, translation, interpretation, control, self-control, autopoiesis, self-reference, creativity, as well as to the distinction between genuine semiosis and quasi-semiosis. In contrast to John Searle, who argues that computers are mindless Chinese boxes and hence necessarily nonsemiotic machines, Peirce, long before the advent of computers, showed on the one hand that machines can certainly participate in processes of quasi-semiosis, but on the other hand that human minds, to a certain degree, can also operate like mere machines. However, although genuine semiosis is not restricted to operations of the human mind, since it occurs widely in other spheres of life or even prebiological evolution, it is in fact questionable whether machines produced by humans can already be described as capable of triggering genuinely semiotic processes.

1 SYMBOLIC AND SEMIOTIC MACHINES

The concept of symbolic machine has become a common metaphorical designation of the computer. Semioticians, especially computer semioticians, have reasons to generalize this designation to semiotic machine. But what is a semiotic machine? If it is just a machine involved in sign processing, a typewriter might perhaps also be called a semiotic machine; if it is a machine not only involved in sign processes, but also creating processes of sign production and interpretation (i.e., processes of semiosis), there may be doubts whether ordinary computers may be called semiotic machines.

1.1 SYMBOLIC MACHINES

In the 1950s, computer scientists came to the conclusion that computers are more than mere calculating machines; they should, instead, be conceived of as symbol processing machines (Newell 1980: 137; Nake 1998: 463). It was Allen Newell (1980) who introduced the term physical symbol system to characterize, more generally, systems capable of processing not only numbers but also symbols. With his theory of physical symbol systems, Newell aimed at a theoretical bridge between the science of intelligent living beings, i.e., cognitive science, and the science of intelligent machines, i.e., computer science and Artificial Intelligence research.

In a quite different sense, Sybille Krämer (1988) has introduced a theory of symbolic machines. According to Krämer’s definition, a symbolic machine is a device that exists, so to speak, only symbolically on paper, having no real physical embodiment. Such a machine in a merely metaphorical sense does therefore nothing but “transform sequences of symbols.” An example of such a “machine” is the algorithm for the multiplication of numbers in decimal notation. A computer, according to this definition, is not a symbolic machine at all, but a kind of metamachine, “a machine able to imitate any symbolic machine” (ibid.: 2-3). This paper will not be concerned with machines in a metaphorical sense, but with real symbol processing machines, such as the ones described by Newell.

Notice, however, that the mathematical definition of the concept of ‘machine’ is applicable to machines both in the metaphorical and in the literal sense. A machine, according to this definition, is a device that “determines a function from its input to its output” (Newell 1990: 65).
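Newell’s mathematical definition can be made concrete in a few lines. The following is only an illustrative sketch (the function name and example are invented here, and Python is chosen purely for illustration): a “machine” in this sense is anything whose behavior is exhausted by a fixed mapping from inputs to outputs.

```python
# A "machine" in Newell's mathematical sense: a fixed mapping from
# input to output, with no behavior beyond the rule it embodies.
def adding_machine(x: int, y: int) -> int:
    """Determines a function from its input to its output."""
    return x + y

# The same definition covers Krämer's symbolic "machines on paper":
# the decimal multiplication algorithm is such a mapping, whether it
# is executed by a pupil, a mechanical device, or this interpreter.
assert adding_machine(2, 3) == 5
```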
1.2 SIGN PROCESSING IN COMPUTERS

From the point of view of general semiotics, the historical shift from machines that process only numbers to machines that also process symbols was not as epoch-making as Newell’s study suggested. After all, numbers are nothing but a class of symbols, and operating with numbers is not radically distinct from operating with other symbols, as Peirce pointed out when he observed: “Although not all reasoning is computing, it is certainly true that numerical computation is reasoning” (CP 2.56). Furthermore, computers do not only operate with symbols, but also with indexical and iconic signs (more precisely quasi-signs; see 2.).

According to Charles Sanders Peirce, a symbol is a sign related to the object it designates according to “a law or a regularity” (CP 2.293). Both words and numbers belong to the subcategory of rhematic symbols. Most text processing programs, e.g., have a thesaurus which offers synonyms for stylistic improvement. When the user makes use of the thesaurus, the computer correlates and produces rhematic symbols. Machines capable of symbol production in this sense have been known since the invention of the first symbolic machines by W. Stanley Jevons and Charles Babbage in the 19th century. These machines were logical machines: after the input of the premises, the user, by pushing a lever, obtained the conclusion as the automatic output (Peirce 1887; Ketner 1988; Krämer 1988: 128). They were thus not only able to produce rhematic symbols, but also symbols of the category of the argument (Nöth 2000a: 67).

Indexical signs, which draw the interpreter’s attention to their object by an immediate spatial, temporal, or causal connection, are apparent in computer programming and text processing when the user is instructed by means of arrows, the cursor, or by commands such as assign, do, exit if, or continue if (Newell 1980: 144-145). Iconic signs, which are based on a relationship of similarity between the sign and its object, also occur in text processing. Copy and paste is one of the most elementary computer operations which produce iconic signs. The mapping, modeling, and even simulating of reality belong to the more complex forms of iconic representation of which computers are capable.
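The three kinds of (quasi-)signs just distinguished line up with elementary operations of any ordinary program. The sketch below is illustrative only; the function names and the tiny thesaurus are invented for the example, not taken from the paper:

```python
# Iconic quasi-sign: copy-and-paste produces a token that resembles
# (is similar to, hence an icon of) its model.
def copy_paste(text: str) -> str:
    return text

# Indexical quasi-sign: a cursor position points at a place in the
# text by an immediate spatial connection.
def point_at(text: str, cursor: int) -> str:
    return text[cursor]

# Symbolic quasi-sign: a thesaurus correlates symbol with symbol
# according to a rule (toy data, invented here).
THESAURUS = {"big": "large", "fast": "rapid"}

def give_synonym_of(word: str) -> str:
    return THESAURUS.get(word, word)

print(copy_paste("semiosis"))    # icon: an exact copy
print(point_at("semiosis", 0))   # index: 's'
print(give_synonym_of("big"))    # symbol: 'large'
```

Note that each operation relates signs only to other signs; whether such correlations amount to more than quasi-semiosis is exactly the question pursued below.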
1.3 SEMIOTIC MACHINES AND MACHINE SEMIOSIS

Hence, we will not only be concerned with the computer as a symbolic machine, but as a semiotic machine (Nake 1997: 32), a machine not restricted to the processing of symbols, but also involved in other sign processes. Our topic is thus machine semiosis, as defined by Andersen et al. (1997: 548), i.e., “sign processes within machines and between machines.” However, before we can adopt terms such as machine semiosis and semiotic machine, the nature of semiosis and of sign processing in general will have to be defined, and several distinctions will have to be made between the different kinds of sign processes in which machines are involved. For example, the mediation of signs by means of machines must be distinguished from the nature of sign processing within machines.

The semiotic field of sign processes ranging from technical devices to living systems has often been analyzed in terms of dualisms, such as tools vs. instruments, instruments vs. machines, and above all machines vs. living beings. Instead of affirming such dualisms, we will try in the following to describe this semiotic field as a gradual continuum from less complex to more complex processes of sign processing. Among the less complex processes are those merely mediated by instruments or technical devices such as a thermometer, a sundial, a thermostat, or the system of an automatic traffic light. The most complex processes of semiosis occur in living systems.

2 SIGNS AND SEMIOSIS, QUASI-SIGNS, AND QUASI-SEMIOSIS

There are many definitions and models of the sign, but in this paper, our guideline is the semiotics of Charles Sanders Peirce (Nöth 2000a: 62-64, 227). A sign, according to Peirce, is a material or merely mental phenomenon related to a previous phenomenon, the object of the sign, and resulting in a further sign, the interpretant, which provides an interpretation of the first sign in relation to its object. Semiosis, in this perspective, is a dynamic process in which the sign, affected by its preceding object, develops its effect in the ensuing interpretant. The sign does not serve as a mere instrument of thought; it develops a dynamics of its own which is in some way independent of an individual mind. Furthermore, semiosis is not restricted to sign production and interpretation in humans, and there is no dualism between mind and matter, but a theory of continuity between both (Nöth 2002). Does this theory of continuity from matter to mind (synechism) imply that there is semiosis in matter, machines, and human minds?

2.1 THE PARADOX OF THE SEMIOTIC MACHINE

If we define semiotic with Peirce as “the doctrine of the essential nature and fundamental varieties of possible semiosis” (CP 5.488), semiosis as the “intelligent, or triadic action of a sign” (CP 5.472-73) which involves “a cooperation of three subjects, such as a sign, its object, and its interpretant” (CP 5.484), and if we accept Peirce’s “provisional assumption that the interpretant is [...] a sufficiently close analogue of a modification of consciousness” (CP 5.485), the idea of a semiotic machine must appear a contradiction in terms. Semiotic, according to such premises, seems to presuppose living organisms as sign producers and sign interpreters. Whether the “action of the sign” can also develop in machines or whether semiosis does in fact presuppose life is the problem to be examined in the following on the basis of Peirce’s semiotics.

No doubt, machines are involved in sign processes. With its capacity for data processing, the computer is certainly a machine operating with signs, but many other machines are also involved in sign processes. Typewriters, copy machines, cameras, and tape recorders, e.g., are machines which produce signs. Are they semiotic machines? If semiosis is required, a copy machine can certainly not be called a semiotic machine, although it may be said to produce signs. After all, a pencil is also involved in sign production, but it can hardly be considered to be the sufficient cause of an interpretant.
In spite of his criteria of semiosis, which suggest life as a prerequisite of semiosis, Peirce (1887), who often used the term “logic” as a synonym of “semiotic,” outlined a theory of “logical machines” (without calling them “semiotic machines”) long before the invention of Artificial Intelligence (Ketner 1988; Skagestad 1993, 1999; Tiercelin 1993). More than a century ago, he discussed the “logical machines” invented by Jevons and Marquand and concluded that these devices, as well as the calculating machines of his times, were “reasoning machines.” Since reasoning seems to be a process of semiosis, we might conclude that these machines were semiotic machines. However, Peirce suggests that they are not, when he goes so far as to conclude that “every machine is a reasoning machine” (ibid.: 168). Is reasoning then possible without semiosis? Elsewhere Peirce gives the answer: a machine, such as the Jacquard loom, although capable of reasoning according to the above premises, is not capable of “the triadic production of the interpretant” and operates hence only as a quasi-sign (CP 5.473).

2.2 MECHANICAL SIGN PROCESSING AS QUASI-SEMIOSIS

The term quasi-sign suggests an answer to the question whether there can be semiosis in a machine of the kind which Peirce knew. A quasi-sign is only in certain respects like a sign, but it does not fulfil all criteria of semiosis. While some criteria of semiosis may be present in machines, others are missing. The concept of quasi-sign thus suggests degrees of semioticity. Quasi-semiosis does not only begin with calculating machines. It can be found in processes in which much simpler instruments are involved.

Among the instruments to which Peirce ascribes a quasi-semiotic function is a thermostat “dynamically connected with the heating and cooling apparatus, so as to check either effect.” The automatic indication of temperature which occurs in the thermostat is only an instance of “automatic regulation” and does not create an interpretant as its “significate outcome,” Peirce argues (CP 5.473). There is no genuine index, but only a quasi-index; no semiosis, but only quasi-semiosis. Quasi-semiosis, in the case of the thermostat, is thus the reduction (“degeneration” is Peirce’s term) of a triadic sign process involving a sign (representamen), affected by an object, and creating an interpretant, to a merely dyadic process with only a sign affected by its object. The difference between the two kinds of processes is apparent when Peirce compares the mechanical ‘quasi-interpretation’ of the temperature indicated by the thermostat with a mental interpretation of a temperature indicated by a thermometer:

The acceleration of the pulse is probably a symptom of fever, and the rise of the mercury in an ordinary thermometer [...] is an index of an increase of atmospheric temperature, which, nevertheless, acts upon it in a purely brute and dyadic way. In these cases, however, a mental representation of the index is produced, which mental representation is called the immediate object of the sign; and this object does triadically produce the intended, or proper, effect of the sign strictly by means of another mental sign. (CP 5.473)

Thus, when the machine reacts causally to the temperature indicated by the thermostat, it does not interpret it. There is no genuine semiosis: the signal indicating the temperature by which it is causally affected functions only as a quasi-index, and the mechanical reaction of the machine elicited by this quasi-index is only a process of quasi-semiosis.
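A minimal sketch can make the dyadic character of this quasi-semiosis concrete. It is illustrative only; the set point and names are invented for the example:

```python
TARGET = 20.0  # invented set point, degrees Celsius

def thermostat(temperature: float) -> bool:
    """Dyadic reaction: the reading causes the switch, nothing more."""
    return temperature < TARGET  # True = switch the heater on

# Cause and effect exhaust the process; no interpretant is produced
# that would take the reading *as* an index of room temperature.
print(thermostat(18.5))  # True
```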
Cause and effect constitute a dyadic relationship. Only when an interpretant is created to interpret this dyad of cause and effect on its own does semiosis begin to take place (see also 5.).

2.3 SIGN PROCESSING IN COMPUTERS AS QUASI-SEMIOSIS

Evidence of the quasi-semiotic nature of data processing comes from the dyadic nature of the signs involved. The view that sign processing in computers is based on dyadic relationships is implicit in a widely held theory which states that computers can only process signals (Nake 1997: 33), i.e., mechanical stimuli followed by automatic reactions. Winograd & Flores (1986: 86-87), e.g., refer to signal processing when they write: “One could describe the operations of a digital computer merely as a sequence of electrical impulses traveling through a complex net of electronic elements, without considering these impulses as symbols for anything.”

Consider the three examples of iconic, indexical, and symbolic sign processing discussed above: ‘copy-and-paste,’ ‘exit-if,’ or ‘give-synonym-of.’ The processes involved clearly constitute dyadic relations between signs within the computer. In fact, when Newell (1990: 74-75) describes symbol processing in computers as a process relating two physical symbols, X and Y, where X provides “access to the distal structure Y,” which is “transported by retrieval from the distal location to the local site,” he gives a good account of dyadic processes of quasi-semiosis. What is missing for these signs to develop from dyadic to triadic signs is an object relationship. These dyadic relations are merely relations of signification, but there is no denotation, no ‘window to the world’ relating the sign to an object of experience (Nöth 1997: 209-210). Hence, we have to conclude that the iconic, indexical, and symbolic signs with which the computer operates are quasi-signs.
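Newell’s account of distal symbol access can itself be rendered in a few lines. The sketch below uses assumed names (nothing here is Newell’s notation): a symbol X gives access to a distal structure Y, which is retrieved to the local site — a purely sign-to-sign, and hence dyadic, relation.

```python
# Distal symbol access in Newell's sense, sketched as a lookup:
# the symbol "X" gives access to the distal structure it designates.
distal_memory = {"X": ["Y", "a", "structure", "of", "symbols"]}

def access(symbol: str) -> list[str]:
    """Retrieve the distal structure and 'transport' it locally."""
    return list(distal_memory[symbol])  # copy to the local site

local_site = access("X")
# The relation runs from sign to sign; at no point does anything here
# relate "X" or "Y" to an object of experience outside the system.
print(local_site)
```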
2.4 SEMIOSIS IN THE INTERFACE BETWEEN HUMANS AND COMPUTERS

Whereas the sign processes within machines considered so far are quasi-semiotic processes, processes in which machines serve as mediators in human semiosis are certainly processes of genuine semiosis. If a traffic sign is a genuine sign to a driver, an automatic traffic light is no less a genuine sign. In this sense, sign processing in the interface between humans and computers is genuine semiosis. Signs are produced by humans, mediated by machines, and interpreted by humans. In this classical communication chain, the computer pertains to the message. The human sender and receiver are either two different persons or one and the same person in a situation of self-communication. In such processes of computer-mediated communication, the computer serves as a semiotic extension of human semiosis. It is used as a most powerful tool for the more efficient manipulation of human semiosis. As such, it is the most recent development in the semiotic extension of humans in a cultural development that began with the invention of painting, writing, printing, phonographs, typewriters, and many other media (cf. Popper 1972: 238-39). However, the messages produced by a computer in the interface of humans and machines are either messages conveyed by a human sender and mediated by the computer, or they are quasi-signs resulting from an automatic and deterministic extension of human semiosis.

3 MIND MACHINES VS. MECHANICAL MINDS

Nevertheless, it still remains to be determined whether a computer can also be an agent in genuinely semiotic processes. Can it be the source of an “intelligent, or triadic sign action” on its own? Perhaps sign processing in computers is reducible to electronic signaling, and hence to quasi-semiosis, only at its most elementary level, and perhaps the complexity of computer semiosis is as insufficiently described at this level as the brain is when its operations are described as the sequence of positive or negative signals which occur as the input and output of ten billion neural cells? The question whether there can be semiosis in computers is closely related to questions such as: Can computers think? Do they have intentions or even a mind? Before dealing further with Peirce’s theory of mind and his views on the possibility of genuine semiosis in machines, we will introduce a classical argument against the mindlike agency of computers and contrast it with the counter-argument that machines perform mind work.

3.1 MINDLESS AGENTS IN SEARLE’S CHINESE ROOM

The view of the computer as a mere signal processing machine has been defended by John Searle (1980) in mentalist categories. The core of his argument is: a computer working according to a preprogrammed algorithm cannot be a mind machine since it cannot really understand the symbols with which it operates. Searle explains his argument by means of his famous parable of the Chinese room, in which messages are processed by people who do not even understand the meaning of the individual words. The servants in this room are monolingual Americans who receive the messages in Chinese but are nevertheless able to process them on the basis of numerical instructions that tell them how to combine and correlate the elements of the incoming messages. Consequently, these Americans (alias the computer) do not understand (and hence are not affected by semiosis) “because the formal symbol manipulations by themselves don’t have any intentionality; they are quite meaningless; they aren’t even symbol manipulations, since the symbols don’t symbolize anything. [...] Intentionality as computers appear to have is solely in the minds of those who program them, those who send in the input and those who interpret the output” (Searle 1980: 422).

With his parable of the blind agents working mechanically inside the machine without a mind, Searle believes he has given the deathblow to the myth of the computer as a mind machine. However, his argument suffers from a Cartesian bias, namely the assumption that a clear-cut division of labor into mental and mechanical work is possible. His argument is not really valid as an argument against the computer as a mind machine. After all, in order to perform their mechanical work, the poor Americans in the Chinese room must have both minds and intentions. Hence, the work they do must be mind work, and the machine of which they are a metaphor must be a kind of mind machine.
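The purely formal manipulation at issue can be sketched in a few lines. The rule table below is invented for illustration; the point is precisely that the procedure works the same for any opaque tokens:

```python
# Rule-following without understanding, as in Searle's Chinese room:
# incoming tokens are matched and answered by table lookup alone.
RULE_BOOK = {  # opaque tokens, invented stand-ins for Chinese characters
    ("token_17", "token_3"): "token_42",
    ("token_5",): "token_9",
}

def chinese_room(message: tuple) -> str:
    """Correlate input shapes with output shapes; no semantics needed."""
    return RULE_BOOK.get(message, "token_0")  # default reply

print(chinese_room(("token_17", "token_3")))  # 'token_42'
# Whether this counts as symbol manipulation at all is exactly Searle's
# point: nothing here symbolizes anything for the system itself.
```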
3.2 MIND IN MANUAL, MECHANICAL, AND MENTAL WORK

From the point of view of cultural history, a machine has been defined as an apparatus that requires an input of power or energy to perform certain tasks which substitute, and hence save, the labor of humans or animals. A motorcar requires the input of gasoline and saves the labor of humans or horses. A washing machine requires an input of electricity and saves the manual labor of washing by hand. Along these lines of argument, the computer has been defined as a machine which saves mental work (Nake 1992: 185; Santaella 1998: 124). In contrast to former generations of merely mechanical machines, which served to replace manual or muscular work only, the computer must hence be a mind machine since it serves to replace mental work. However, where does manual work end, and where does mental work begin? Can this question be answered without a Cartesian bias?

Let us first consider manual work and the various cultural devices which have been invented to substitute for it. In fact, the saving of labor begins with devices that were invented long before the first machine. The simple writing instrument of a fountain pen, e.g., is an instrument that saves labor, since the use of its predecessor, the goose or steel pen, required the labor of using an inkstand while writing. Of course, the fountain pen does not require an input of energy, and it is therefore not a writing machine, but only a simple writing tool.

Is a mechanical typewriter, as the German equivalent Schreibmaschine suggests, a ‘writing machine,’ or is it a mere tool? Since in old-fashioned typewriting there is neither an input of energy nor a real saving of muscular energy in comparison with handwriting, the typewriter is hardly more than a mere writing tool. An electrical typewriter, by contrast, is certainly a machine. It requires electricity as its input and facilitates manual labor by reducing muscular effort. Does it also save mental work like the computer, or does it only economize muscular work?

If typewriting and handwriting do not differ greatly as to the manual effort that has to be invested in the writing task, why was the typewriter invented at all? Apparently the advantage of typewriting lies not so much in the greater ease of writing as in the greater ease of reading, due to the standardized and regular characters, lines, and paragraphs. Greater ease of reading, however, also means economy of mental work. Hence, the mechanical typewriter, long before the computer, was already a machine that served to economize mental work.

Another machine which undoubtedly facilitates mental work is the calculating machine. Calculating is mental work, and a machine that does the calculating for its user is a machine that saves mental work. On the other hand, it is true that usually, without a calculating machine, we calculate the more complex tasks by means of manual operations, writing the numbers in rows and lines in order to break down the complex task into simpler elementary operations. This makes calculating also manual work, and calculating by means of a calculating machine does not only save mental, but also manual work.

A machine like the sewing machine appears to be the least likely candidate for a semiotic machine, for it seems to have been invented exclusively for the purpose of saving manual labor.
However, is the kind of work that it saves, namely sewing by hand, not also mental work? After all, the cutting and the manipulating of the cloth, the needle, and the pin require careful planning and coordination of movements. Thinking is certainly necessary before and while the sewing operations are carried out.

In sum, the distinction between manual and mental work is not clear-cut. All machines save mental and manual work. It is not by chance that the areas of the human cortex which coordinate our manual operations are unusually large. Altogether, the cerebral area which coordinates the movements of the human hands and arms is not smaller than the one which coordinates facial expressions and the movements of tongue, lips, and jaw during speech articulation (Geschwind 1982: 112), and this is not really surprising if we consider the evolutionary parallels between manual and communicative activities (Leroi-Gourhan 1964-65: 188-89). Now, if all machines save mental work and are hence also mental machines, what is the difference between mechanical and human minds?

3.3 REASONING MACHINES AND MECHANICAL MINDS

Peirce’s answer to the question of the mind in the machine is differentiated. In spite of his theory of mechanical quasi-semiosis, his argument is: while machines do not work like human minds in all respects, they do so in some respects, and at the same time machines must be seen as operating like mechanical minds.

In addition to his theory of quasi-semiosis in machines, which stresses the differences between human semiosis and sign processing in machines, Peirce, in his theory of logical machines, also gave an account of the similarities between humans and machines (Ketner 1988; Tiercelin 1993: 228ff.). In contrast to Searle, Peirce argues that the human mind actually works like a machine in certain respects. This argument sounds reductionist, but it certainly does not state that the human mind is a machine. Only when solving a task that a logical or calculating machine can also solve, i.e., when merely following the rules of a predetermined algorithm in a quasi-mechanical way, does the human mind work like a machine:

All that I insist upon is, that, in like manner, a man may be regarded as a machine which turns out, let us say, a written sentence expressing a conclusion, the man-machine having been fed with a written statement of fact, as premiss. Since this performance is no more than a machine might go through, it has no essential relation to the circumstance that the machine happens to work by geared wheels, while a man happens to work by an ill-understood arrangement of brain-cells. (CP 2.59)

In accordance with his synechistic theory of the gradual evolutionary transition between mind and matter, Peirce does not only conclude that the human mind, when solving a mathematical or logical problem, works like a mind machine, but also that the calculating and the logical machines of his time were “reasoning machines.” This similarity between human thought and merely mechanical “reasoning,” according to Peirce, can be explained by the common evolutionary heritage of biological and physical nature: both the human brain and the physical laws of mechanics have evolved under the same cosmological constraints, so that a certain degree of similarity between the operations of both can be assumed (cf. Nöth 2001a, 2002).
The mode of sign processing common to humans and machines is diagrammatic iconicity:

The secret of all reasoning machines is after all very simple. It is that whatever relation among the objects reasoned about is destined to be the hinge of a ratiocination, that same general relation must be capable of being introduced between certain parts of the machine. (Peirce 1887: 168)

In this respect, however, not only a logical machine, but every machine is a reasoning machine, in so much as there are certain relations between its parts, which relations involve other relations that were not expressly intended. A piece of apparatus for performing a physical or chemical experiment is also a reasoning machine, with this difference, that it does not depend on the laws of the human mind, but on the objective reason embodied in the laws of nature. Accordingly, it is no figure of speech to say that the alembics and cucurbits of the chemist are instruments of thought, or logical machines. (ibid.)

3.4 (QUASI-)MIND IN THE INKSTAND

If not only logical machines, but also all other machines and even technical instruments are instruments of thought endowed with the capacity of reasoning, we must conclude that machines evince mind. In fact, Peirce goes so far as to ascribe mind and thought even to the physical world, when he writes: “Thought is not necessarily connected with a brain. It appears in the work of bees, of crystals, and throughout the purely physical world” (CP 4.551). The semiotic theory of mind underlying this affirmation is beyond the scope of the present paper (but see Santaella 1994). We can only focus on some of its aspects in our study of the enigma of the mind in the machine. In this context, it is first of all important to specify that Peirce, when talking about “non-human thought” (CP 4.551) in physical nature, introduces the concept of quasi-mind in order to distinguish between mind in the sense of cognitive psychology and mind in the processes of semiosis associated with signs “in a very wide sense” (ibid.).

It is hence quasi-semiosis and quasi-mind that we find in the “mind machines” and the “mechanical minds” considered so far. Elsewhere Peirce develops the argument that mind in this wider sense is localized not only in the brain of a writer, but also in the materiality of his semiotic medium, namely ink:

A psychologist cuts out a lobe of my brain [...] and then, when I find I cannot express myself, he says, “You see your faculty of language was localized in that lobe.” No doubt it was; and so, if he had filched my inkstand, I should not have been able to continue my discussion until I had got another. Yea, the very thoughts would not come to me. So my faculty of discussion is equally localized in my inkstand. It is localization in a sense in which a thing may be in two places at once. (CP 7.366)

The interpretation of this enigmatic quote of 1902 has several dimensions (Skagestad 1993, 1999; Tiercelin 1993: 240), but in our context, what is first of all relevant is Peirce’s argument that we should look for mind “in two places at once.” In the case of the writing author, one place is his brain, the internal locus of sign production; the other is the inkstand, the locus of the external materialization of the sign. Both loci represent two aspects of semiosis, inseparably welded like the two sides of a coin.
Justifications of this argument concerning the essential unity of the internal and the external manifestations of the sign can be found in Peirce’s pragmaticism. It provides two keys to the understanding of the enigma of the mind in the inkstand: the theory of the unity of the sign with its external representation, and the theory of the unity of thought and action.

The theory of the unity of the sign and its representation states that “thought and expression are really one” (CP 1.349). Thought, in the sense of a cerebral engram, and its expression in the form of a written manifestation are two sides of one and the same sign, because the written word is not merely an external instrument produced by a human brain and used by a human being for some specific external purpose, as the so-called instrumental theory of the sign would have it (cf. Nöth 2000a). Against the instrumental view of the sign, Peirce argued that the idea or thought conveyed by a sign cannot exist before this sign is externally manifested; idea and representation must instead come into existence simultaneously. Nor can meaning in the sense of the interpretant precede the sign, since it is the effect, and not the cause, of the sign. If thought does not precede its representation, but comes into semiotic existence simultaneously with it, it would be in vain to search for thought and meaning in the black box of the brain while there is an external manifestation that testifies to the nature of this thought. Since ideas represented by words, texts, or books do not precede such external manifestations of the sign, Peirce’s conclusion is that the sign cannot be localized in the brain alone, but must also be sought in the signs that result from the cerebral activity. Focusing on the second side of the semiotic coin, Peirce concludes that “it is much more true that the thoughts of a living writer are in any printed copy of his book than that they are in his brain” (CP 7.364). In a different context, where a writer’s style (“good language”) is the topic, Peirce expresses this idea of the unity of sign and thought as follows: “It is wrong to say that a good language is important to good thought, merely; for it is the essence of it” (CP 2.220).

The principle of the unity between thought and action provides the other key to the enigma of the mind in the inkstand. The mind of an author cannot be reduced to what goes on in the brain, since the process of writing also comprises the external manual activity of using the medium of ink to produce the written word. “My pencil is more intelligent than I,” Einstein used to say with reference to the advantages of calculating manually on paper in contrast to mental calculus (cf. Skagestad 1993: 164). Writing and written calculus are not mere semiotic alternatives to speaking and mental calculus, but operations which permit the development of more difficult arguments and the solution of more difficult problems, since the fixation of the signs on paper has the advantage of increasing our memory. This effect of the externalization of our memory is one of the reasons why thoughts come to a writer while writing on paper. Furthermore, the thoughts that come to us when speaking are not the same as those that we express when writing on the same subject.
The difference is manifest in the difference between oral and written style. Today, after McLuhan’s thesis of the message in the medium, we may also presume that the thoughts that come to us when writing by means of a machine are not in all respects the same as those that come to us when our medium of writing is the pen.

The conclusion of this line of argument is: on the one hand, there is (quasi-)mind not only in the brain, but also in the machine; on the other hand, this quasi-mind is only a necessary, but not yet a sufficient, condition of genuine semiosis. The still missing conditions will be the topic of the final section of this paper.

4 CONTROL, SELF-CONTROL, AND AUTOPOIESIS

In spite of their reasoning capacities, the logical machines of the 19th century were still lacking a feature of genuine semiosis that Peirce used to define as self-control. A machine lacks self-control if it is completely controlled by its input. Are all machines of this kind, or are there machines that have begun to take over the control of their own self?
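The distinction can be sketched in code. The toy below is an illustration under assumed names, not a claim about what would satisfy Peirce’s notion of self-control: it contrasts a machine whose output is fully determined by its input with one that also revises its own rule in the light of its performance.

```python
# Input-controlled machine: the rule is fixed once and for all;
# the output depends on the input alone.
def fixed_machine(x: float) -> float:
    return 2.0 * x

# Rudimentary "self-control": the machine adjusts its own rule from
# feedback on its errors (a toy learning loop, not genuine semiosis).
class SelfAdjustingMachine:
    def __init__(self) -> None:
        self.gain = 1.0  # the rule it is able to revise

    def respond(self, x: float, target: float) -> float:
        y = self.gain * x
        self.gain += 0.1 * (target - y) * x  # correct its own behavior
        return y

m = SelfAdjustingMachine()
for _ in range(100):
    m.respond(1.0, target=2.0)
print(round(m.gain, 2))  # 2.0: the machine has changed its own rule
```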

Journal

  • Cybernetics and Human Knowing, Vol. 9 (2002)